Artificial Intelligence and language models like ChatGPT have transformed the way we interact with technology. With these advancements come new challenges and risks, particularly when it comes to maintaining free, neutral knowledge on platforms like Wikipedia.

Wikipedia is the largest online encyclopaedia. It relies on human editors who create, edit, and review articles to ensure that they are accurate, neutral, and free from bias. However, the rise of tools like ChatGPT raises concerns both about their impact on the quality and accuracy of Wikipedia articles and about their role in the spread of misinformation.

One of the main risks is the potential for bias and misinformation. ChatGPT and similar models learn from the data they are trained on, which means that if that data is biased or inaccurate, their output will be as well. Such biased and inaccurate information can then be used to create or edit Wikipedia articles, spreading misinformation and undermining the credibility of Wikipedia as a trusted source of knowledge.

To address these risks, Wikipedia's human editors remain essential protectors of free knowledge. While AI and language models can be useful tools, they should not replace the critical thinking, judgment, and expertise of human editors. Human editors ensure that articles are accurate, neutral, and free from bias, and they can identify and address potential issues and inconsistencies in the information presented.

Human editors, and the Wikimedia associations that support them, are crucial to ensuring that Wikipedia remains a trusted and reliable source of knowledge.

As a non-profit organisation, Wikimedia CH relies on donations from individuals and organisations who share our mission to protect and disseminate free knowledge. To support our community of human editors and volunteers in their fight for free knowledge, please consider making a tax-deductible donation by clicking the button below.


Picture: Human Vs Robot / iStock.com/imaginima